STATS 300 A : Theory of Statistics Fall 2015 Lecture 3 —
Abstract
Before discussing today’s topic, let’s take a step back and situate ourselves with respect to the big picture. As mentioned in Lecture 1, a primary focus of this course is optimal inference. As a first step toward reasoning about optimality, we began to examine which statistics of the observed data are actually relevant in a given inferential task. We learned about lossless data reduction and about the concept of sufficiency. We understood, through our notions of statistical risk, that lossless data reduction does just as well as using the original data and that extraneous data can only hurt. We then examined a broad class of distributions, viz. the exponential families, and saw how they are intimately related to our notions of sufficiency. Using the concept of minimal sufficiency, we initiated a discussion of how data can be maximally compressed without losing information relevant to the inference task.
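The sufficiency idea summarized above can be made concrete with a small example that is not from the lecture notes themselves: for an iid Bernoulli(p) sample, T = ΣXᵢ is sufficient for p, and one way to see this is that the conditional probability of the full sample given T equals 1/C(n, t), free of p. A minimal sketch:

```python
# Illustrative example (not from the notes): for X1,...,Xn iid Bernoulli(p),
# T = sum(X_i) is sufficient for p because P(X = x | T = t) = 1 / C(n, t),
# which does not depend on p.
from math import comb

def conditional_prob(x, p):
    """P(X = x | T = sum(x)) for an iid Bernoulli(p) 0/1 sample x."""
    n, t = len(x), sum(x)
    joint = p**t * (1 - p)**(n - t)                   # P(X = x)
    marginal = comb(n, t) * p**t * (1 - p)**(n - t)   # P(T = t)
    return joint / marginal                            # = 1 / C(n, t), free of p

x = [1, 0, 1, 1, 0]  # n = 5, t = 3
# The conditional probability is about 1/C(5,3) = 0.1 for every choice of p:
print([conditional_prob(x, p) for p in (0.2, 0.5, 0.9)])
```

Because the conditional law of the data given T carries no information about p, any inference procedure can be based on T alone without loss, which is exactly the factorization intuition behind sufficiency.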
Similar resources
STATS 300 A : Theory of Statistics Fall 2015 Lecture 11 — October 27
This lemma allows us to find a minimax estimator for a particular tractable submodel and then show that the worst-case risk for the full model equals that of the submodel (that is, the worst-case risk does not rise in passing to the full model). In that case, using the lemma, we can argue that the estimator we found is also minimax for the full model. This was similar to how we justified mi...
STATS 300 A : Theory of Statistics Fall 2015 Lecture 10 — October 22
X1, . . . , Xn iid ∼ N(θ, σ²), with σ² known. Our goal is to estimate θ under squared-error loss. For our first guess, pick the natural estimator X̄. Note that it has constant risk σ²/n, which suggests minimaxity, because we know that Bayes estimators with constant risk are also minimax estimators. However, X̄ is not Bayes for any prior, because under squared-error loss unbiased estimators are Ba...
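The constant-risk claim in the snippet above is easy to check numerically. A minimal Monte Carlo sketch (illustrative, not from the notes), assuming squared-error loss and the sample mean X̄:

```python
# Monte Carlo check (illustrative): under squared-error loss, the risk of the
# sample mean for N(theta, sigma^2) data is sigma^2 / n, independent of theta,
# i.e., X-bar has constant risk.
import random

def mc_risk(theta, sigma=2.0, n=10, reps=100_000, seed=0):
    """Monte Carlo estimate of E[(X-bar - theta)^2] at a given theta."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        xbar = sum(rng.gauss(theta, sigma) for _ in range(n)) / n
        total += (xbar - theta) ** 2
    return total / reps

# Risk is approximately sigma^2 / n = 4/10 = 0.4 at every theta:
print([round(mc_risk(theta), 2) for theta in (-5.0, 0.0, 3.0)])
```

The estimated risk is (up to simulation error) the same at every θ, which is the constant-risk property that, combined with a Bayes or limiting-Bayes argument, drives the minimaxity discussion.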
Stats 300 b : Theory of Statistics Winter 2018 Lecture 15 — February 27
Theorem 1. Let {Xn}∞n=1 ⊂ L∞(T) be a sequence of stochastic processes on T. The following are equivalent: (1) Xn converges in distribution to a tight stochastic process X ∈ L∞(T); (2) both of the following hold: (a) finite-dimensional convergence (FIDI): for every k ∈ N and t1, . . . , tk ∈ T, (Xn(t1), . . . , Xn(tk)) converges in distribution as n → ∞; (b) the sequence {Xn} is asymptotically stoch...
Publication date: 2015